Enable re-use of existing private hosted zone in AWS #4682
mihail-milev wants to merge 2 commits into openshift:master from mihail-milev:feature/aws-enable-use-existing-privatezone
Conversation
This commit modifies the cluster-dns-02-config.yml manifest file so
that the installer can be asked to re-use an already existing private
AWS hosted zone in Route53.
In the data/data/aws/route53/base.tf file a new "data" section was
added, and based on the value of a TF variable either the "data" section
or the "resource" section is used. If the TF variable is set to
Golang's string zero value (""), a new hosted zone is created (the
default behavior); otherwise TF tries to use the hosted zone whose ID
matches the one specified in the TF variable.
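The Terraform pattern described above can be sketched roughly as follows. This is an illustrative sketch, not the actual contents of base.tf: the variable names (`aws_internal_zone_id`, `cluster_domain`, `vpc_id`) and resource labels are assumptions, and only the `count`-based data/resource switch is what the commit describes.

```hcl
variable "aws_internal_zone_id" {
  type        = string
  default     = ""
  description = "ID of an existing private hosted zone to reuse; empty (the default) means create a new zone"
}

variable "cluster_domain" {
  type = string
}

variable "vpc_id" {
  type = string
}

# Look up the existing zone only when an ID was supplied.
data "aws_route53_zone" "existing" {
  count   = var.aws_internal_zone_id == "" ? 0 : 1
  zone_id = var.aws_internal_zone_id
}

# Create a new private zone only when no ID was supplied (default behavior).
resource "aws_route53_zone" "new" {
  count = var.aws_internal_zone_id == "" ? 1 : 0
  name  = var.cluster_domain

  vpc {
    vpc_id = var.vpc_id
  }
}

# Downstream resources reference whichever zone is in use.
locals {
  private_zone_id = var.aws_internal_zone_id == "" ? aws_route53_zone.new[0].zone_id : data.aws_route53_zone.existing[0].zone_id
}
```

The `count` trick is the usual Terraform idiom for making a data source and a resource mutually exclusive based on one variable.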
In order to activate the mechanism:
1) specify "publish: Internal" inside the install-config.yaml
2) generate the manifests
3) modify manifests/cluster-dns-02-config.yml by:
3.1) removing the "tags" section from "privateZone"
3.2) adding an "id" string value inside "privateZone"; set its value
     to the ID of the hosted zone you wish to reuse ("Z0123QQQZXX...")
4) execute "openshift-install create cluster ..."
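After step 3, the privateZone stanza in manifests/cluster-dns-02-config.yml would look roughly like this. The zone ID and base domain below are placeholders, not values from the PR:

```yaml
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  name: cluster
spec:
  baseDomain: mycluster.example.com
  privateZone:
    # The "tags" section has been removed and replaced by the ID
    # of the existing private hosted zone to reuse.
    id: Z0123QQQZXXEXAMPLE
```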
This is exactly what is described in docs/user/aws/install_upi.md,
in the section "Identify the internal DNS zone".
This mechanism could be extended to other cloud providers, e.g.
Azure or GCP, but I don't have the resources to do it.
Ran the following commands from the CONTRIBUTING documentation
and corrected the errors:
hack/go-fmt.sh .
hack/go-lint.sh $(go list -f '{{ .ImportPath }}' ./...)
hack/go-vet.sh ./...
hack/shellcheck.sh
hack/tf-fmt.sh -list -check
hack/tf-lint.sh
hack/yaml-lint.sh
hack/go-test.sh
Thank you for your PR. Unfortunately, this is not a change that we will be accepting. We deliberately do not provide all possible configurations in the installer-provisioned installation flow. The intention is for the options available to be curated to a small set that is useful for a large share of our users or difficult to work around in other ways.
Hello, I do understand the urge to keep it as simple as possible and to make it easy for users to deploy a cluster. That is the reason why I also kept the default behavior of the installer: just execute it like you do now, and it will work as it has until now. But there are some "power users" out there who deploy > 10 clusters a day, and sometimes there is a need for customization. Most of the time there are restrictions (e.g. not being allowed to create new hosted zones), and every time "workarounds" need to be found. That is not oriented towards the users; that works against the users. What would be the "other ways" to solve this problem?
The "other way" to solve this problem is to take the user-provisioned infrastructure approach. It is precisely for those "power users" with special requirements that we offer that approach, where the user has more control over what is and is not provisioned. You could start with the CloudFormation templates from https://github.com/openshift/installer/tree/master/upi/aws/cloudformation and modify them to your needs, or even start with the Terraform that the installer uses for an installer-provisioned infrastructure installation. Or you can use whatever other tools you prefer to provision the infrastructure.
/close per above. |
Bring-your-own private hosted zone is being implemented as a configurable field in the install config with #4772.
@staebler: Closed this PR.